
    Tensor Monte Carlo: particle methods for the GPU era

    Multi-sample, importance-weighted variational autoencoders (IWAE) give tighter bounds and more accurate uncertainty estimates than variational autoencoders (VAE) trained with a standard single-sample objective. However, IWAEs scale poorly: as the latent dimensionality grows, they require exponentially many samples to retain the benefits of importance weighting. While sequential Monte Carlo (SMC) can address this problem, it is prohibitively slow because the resampling step imposes sequential structure that cannot be parallelised; moreover, resampling is non-differentiable, which is problematic when learning approximate posteriors. To address these issues, we developed tensor Monte Carlo (TMC), which gives exponentially many importance samples by separately drawing K samples for each of the n latent variables, then averaging over all K^n possible combinations. While the sum over exponentially many terms might seem intractable, in many cases it can be computed efficiently as a series of tensor inner-products. We show that TMC is superior to IWAE on a generative model with multiple stochastic layers trained on the MNIST handwritten digit database, and that TMC can be combined with standard variance reduction techniques.
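
    To make the tensor-inner-product trick concrete, below is a minimal numerical sketch for a toy chain model z1 → z2 → x with K samples per latent variable. The model, proposals and constants are illustrative assumptions, not the paper's experimental setup; the point is only that the average over all K^n sample combinations collapses to a chain of matrix products, here costing O(K^2) rather than O(K^n).

        import numpy as np
        from scipy.stats import norm

        # Toy chain model (assumed for illustration):
        #   z1 ~ N(0, 1),  z2 | z1 ~ N(z1, 1),  x | z2 ~ N(z2, 1)
        # Proposals: q(z1) = q(z2) = N(0, 2).
        K, x = 5, 1.3
        rng = np.random.default_rng(0)
        z1 = rng.normal(0.0, np.sqrt(2.0), K)   # K proposal samples for z1
        z2 = rng.normal(0.0, np.sqrt(2.0), K)   # K proposal samples for z2

        # Per-factor importance ratios:
        f1 = norm.pdf(z1, 0.0, 1.0) / norm.pdf(z1, 0.0, np.sqrt(2.0))  # (K,)
        W = (norm.pdf(z2[None, :], z1[:, None], 1.0)
             / norm.pdf(z2, 0.0, np.sqrt(2.0))[None, :])               # (K, K)
        g = norm.pdf(x, z2, 1.0)                                       # (K,)

        # Naive average over all K^2 sample combinations ...
        naive = np.mean(f1[:, None] * W * g[None, :])
        # ... equals a chain of tensor inner-products:
        tmc = f1 @ W @ g / K**2
        assert np.allclose(naive, tmc)
        print("TMC marginal-likelihood estimate:", tmc)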

    Bayesian filtering unifies adaptive and non-adaptive neural network optimization methods

    We formulate the problem of neural network optimization as Bayesian filtering, where the observations are the backpropagated gradients. While neural network optimization has previously been studied using natural-gradient methods, which are closely related to Bayesian inference, those analyses were unable to recover standard optimizers such as Adam and RMSprop, which normalize by the root-mean-square of the gradients; instead they obtained a mean-square normalizer. To recover the root-mean-square normalizer, we find it necessary to account for the temporal dynamics of all the other parameters as they are being optimized. The resulting optimizer, AdaBayes, adaptively transitions between SGD-like and Adam-like behaviour, automatically recovers AdamW, a state-of-the-art variant of Adam with decoupled weight decay, and has generalisation performance competitive with SGD.
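
    The distinction between the two normalizers is easy to state in code. The sketch below contrasts an RMSprop/Adam-style step (root-mean-square normalizer) with the mean-square normalizer that earlier natural-gradient analyses recover; it illustrates that contrast only, not the AdaBayes algorithm itself, and the hyperparameter values are assumed defaults.

        import numpy as np

        def rms_step(theta, g, v, lr=1e-3, beta=0.999, eps=1e-8):
            # RMSprop/Adam-style: divide by the ROOT-mean-square of gradients.
            v = beta * v + (1 - beta) * g**2
            return theta - lr * g / (np.sqrt(v) + eps), v

        def mean_square_step(theta, g, v, lr=1e-3, beta=0.999, eps=1e-8):
            # Mean-square normalizer, as earlier natural-gradient
            # analyses recover.
            v = beta * v + (1 - beta) * g**2
            return theta - lr * g / (v + eps), v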

    A Review of State-of-the-Art Large Sized Foam Cutting Rapid Prototyping and Manufacturing Technologies.

    Purpose – Current additive rapid prototyping (RP) technologies fail to efficiently produce objects greater than 0.5 m³ due to restrictions in build size, build time and cost. A need exists to develop RP and manufacturing technologies capable of producing large objects in a rapid manner directly from computer-aided design data. Foam cutting RP is a relatively new technology capable of producing large complex objects using inexpensive materials. The purpose of this paper is to describe nine such technologies that have been developed or are currently being developed at institutions around the world. The relative merits of each system are discussed. Recommendations are given with the aim of enhancing the performance of existing and future foam cutting RP systems.
    Design/methodology/approach – The review is based on an extensive literature survey covering academic publications, company documents and website information.
    Findings – The paper provides insights into the different machine configurations and cutting strategies. The most successful machines and cutting strategies are identified.
    Research limitations/implications – Most of the foam cutting RP systems described have not been developed to the commercial level, thus a benchmark study directly comparing the nine systems was not possible.
    Originality/value – This paper provides the first overview of foam cutting RP technology, a field which is over a decade old. The information contained in this paper will help improve future developments in foam cutting RP systems.

    Using the Finite Element Method to Determine the Temperature Distributions in Hot-wire Cutting.

    Hot-wire cutting is a common material removal process used to shape and sculpt plastic foam materials, such as expanded polystyrene (EPS). Due to the low cost and sculptability of plastic foams, they are popular materials for large-sized (> 1 m³) prototypes and bespoke visual artefacts. Recent developments in robotic foam sculpting machines have greatly increased the ability of hot tools to sculpt complex geometrical surfaces, bringing the subject into the realm of subtractive rapid prototyping/manufacturing. Nevertheless, foam-cut objects are not being exploited to their full potential due to the common perception that hot-wires are a low-accuracy cutting tool. If greater accuracy for hot-wires can be obtained, they could provide a low-cost method of producing high-value functional engineering parts. Polystyrene patterns for lost foam casting are one such possibility. A nonlinear transient thermal finite element model was developed with the purpose of predicting the kerf width of hot-wire cut foams. Accurate prediction of the kerf width during cutting will allow tool paths to be corrected off-line at the tool definition stage of the CAM process. Finite element analysis software (ANSYS) was used to simulate hot-wire plastic foam cutting. The material property models were compiled from experimental data and commonly accepted values found in the literature. The simulations showed good agreement with the experimental data and thus the model is considered reliable. The simulations provide an effective method of predicting kerf widths under steady-state cutting conditions. Limitations and further developments of the model are described.
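
    As a rough illustration of how a thermal model yields a kerf-width prediction, the sketch below solves the 1-D radial transient heat equation around a stationary wire with explicit finite differences and reads off the radius where the foam exceeds its softening temperature. This is a simplified stand-in for the paper's nonlinear ANSYS finite element model; all material constants are assumed, order-of-magnitude values for EPS.

        import numpy as np

        alpha  = 3e-7    # thermal diffusivity of EPS, m^2/s (assumed)
        T_wire = 300.0   # wire temperature, deg C (assumed)
        T_amb  = 20.0    # ambient temperature, deg C
        T_melt = 100.0   # EPS softening point, deg C (assumed)

        r0, r1, n = 1e-4, 5e-3, 200       # wire surface to far field, m
        r = np.linspace(r0, r1, n)
        dr = r[1] - r[0]
        dt = 0.4 * dr**2 / alpha          # explicit-scheme stability limit

        T = np.full(n, T_amb)
        T[0] = T_wire                     # wire held at fixed temperature
        for _ in range(100_000):          # march toward steady state
            d2T = (T[2:] - 2 * T[1:-1] + T[:-2]) / dr**2
            dTr = (T[2:] - T[:-2]) / (2 * dr)
            T[1:-1] += dt * alpha * (d2T + dTr / r[1:-1])

        # Kerf half-width: outermost radius still above the softening point.
        kerf_half = r[T > T_melt].max()
        print(f"predicted kerf width ~ {2 * kerf_half * 1e3:.2f} mm")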

    InfoNCE is a variational autoencoder

    We show that a popular self-supervised learning method, InfoNCE, is a special case of a new family of unsupervised learning methods, the self-supervised variational autoencoder (SSVAE). SSVAEs circumvent the usual VAE requirement to reconstruct the data by using a carefully chosen implicit decoder. The InfoNCE objective was motivated as a simplified parametric mutual information estimator. Under one choice of prior, the SSVAE objective (i.e. the ELBO) is exactly equal to the mutual information (up to constants). Under an alternative choice of prior, the SSVAE objective is exactly equal to the simplified parametric mutual information estimator used in InfoNCE (up to constants). Importantly, the use of simplified parametric mutual information estimators is believed to be critical to obtaining good high-level representations, and the SSVAE framework naturally provides a principled justification for using prior information to choose these estimators.
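
    For reference, the standard InfoNCE objective the abstract refers to can be written in a few lines: each row i of two representation matrices forms a positive pair, and all other rows act as negatives. This is the usual contrastive form, not the SSVAE derivation; the temperature and toy data are assumed for illustration.

        import numpy as np

        def info_nce(z_a, z_b, temperature=0.1):
            # Cosine-similarity logits; positive pairs lie on the diagonal.
            z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
            z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
            logits = z_a @ z_b.T / temperature                  # (N, N)
            logits -= logits.max(axis=1, keepdims=True)         # stability
            log_sm = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
            return -np.mean(np.diag(log_sm))

        rng = np.random.default_rng(0)
        z = rng.normal(size=(8, 16))
        z_aug = z + 0.05 * rng.normal(size=z.shape)   # toy "augmented" views
        print(info_nce(z_aug, z))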